AI, the “black boxes” whose inner workings researchers are trying to decipher

Do artificial intelligence (AI) designers have green fingers? In the spring, a post by Dario Amodei, co-founder of the AI company Anthropic, compared their work to the art of growing plants. You select the species and the soil, and choose the amount of water and sunlight, carefully following the advice of the most influential botanists, in order to create “the optimal conditions to guide their shape and growth,” he observed. “But the exact structure that emerges is unpredictable,” he added, and our understanding of how it works is “poor.” This is the opposite of a classic computer program, whose designers can explain its mechanisms in great detail.
Another, less bucolic image also often comes up among scientists to describe AI: the “black box.” It is an analogy that amuses Thomas Fel, a French researcher at Harvard University who specializes in understanding these systems. “Paradoxically, they are rather transparent,” he says with a smile, because they are entirely composed of perfectly readable numerical values.
In theory, an AI should be easier to understand than a human brain because its “neurons” are more rudimentary: each is simply a small calculator storing hundreds of numerical values that tell it when to react to signals from its neighbors. Not to mention that they can be “opened and mistreated” without obtaining the patient’s consent, adds Ikram Chraibi Kaadoud, a researcher in “trusted AI” at the National Institute for Research in Computer Science and Automation (Inria) at the University of Bordeaux.
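As an illustration (not from the article), the “small calculator” neuron described above can be sketched in a few lines: it stores numerical values (weights and a bias) that determine when it reacts to signals from its neighbors. The specific numbers and the threshold activation here are arbitrary choices for the example.

```python
def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, plus the neuron's stored bias.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A simple threshold activation: the neuron "reacts" (passes the
    # signal on) only when the combined input is positive.
    return max(0.0, total)

# Example with two incoming signals; weights and bias are illustrative.
output = neuron([1.0, -0.5], [0.8, 0.4], 0.1)
print(output)
```

Because every weight and bias is a plain, readable number, the network is “transparent” in Fel’s sense; the difficulty lies in interpreting what millions of such values collectively do.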
Studying the behavior of neurons
Le Monde